Deepfake content


Spreading AI-generated content could lead to expensive fines

Popular Science

AI-generated "deepfake" materials are flooding the internet, sometimes with dangerous results. In just the last year, AI has been used to make deceptive voice clones of a former US president and to spread fake, politically charged images depicting children in natural disasters. Nonconsensual, AI-generated sexual images and videos, meanwhile, are leaving a trail of trauma that affects everyone from high schoolers to Taylor Swift. Large tech companies like Microsoft and Meta have made some efforts to identify instances of AI manipulation, but with only muted success. Now, governments are stepping in to try to stem the tide with something they know quite a bit about: fines.


Deepfake tweets automatic detection

Frej, Adam, Kaminski, Adrian, Marciniak, Piotr, Szmajdzinski, Szymon, Kuntur, Soveatin, Wroblewska, Anna

arXiv.org Artificial Intelligence

The rise of DeepFake technology in the digital era presents both opportunities and challenges, significantly impacting misinformation through realistic fake content creation, especially in social media tweets [18, 15]. The proliferation of DeepFakes poses a substantial threat to the integrity of information on social media platforms, where the rapid dissemination of false content can lead to widespread misinformation and public distrust. Addressing this issue is critical for maintaining the reliability of digital communications and ensuring that users can distinguish between authentic and manipulated content. Our study leverages natural language processing (NLP) to develop a DeepFake tweet detection framework, aiming to bolster social media information reliability and pave the way for further research in ensuring digital authenticity. By focusing on the linguistic and contextual nuances that differentiate genuine tweets from AI-generated ones, we seek to create a robust detection mechanism that can be integrated into existing social media platforms to mitigate the spread of misinformation. Focusing on detecting DeepFake content in tweets, this research employs the TweepFake dataset to evaluate various text representation and preprocessing methods. The TweepFake dataset provides a diverse and comprehensive collection of tweets that facilitates the training and testing of different detection models. We explore effective embeddings and model-

This work was funded by the European Union under the Horizon Europe grant OMINO (grant no. 101086321) and by the Polish Ministry of Education and Science within the framework of the program titled International Projects Co-Financed.
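The kind of tweet-classification pipeline described above can be sketched with off-the-shelf NLP tools. This is a minimal illustration, not the authors' actual method: the example tweets are invented stand-ins for TweepFake entries, and TF-IDF features with logistic regression are just one plausible baseline among the representations the paper evaluates.

```python
# Hypothetical baseline for human-vs-AI tweet classification.
# The tweets and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "Just landed in Chicago, great to see everyone!",         # human
    "The synergy of tomorrow empowers the dreams of today.",  # bot-like
    "Coffee first, then the world.",                          # human
    "Innovating innovation to innovate the future.",          # bot-like
]
labels = [0, 1, 0, 1]  # 0 = human, 1 = AI-generated

# TF-IDF over words and bigrams feeding a linear classifier.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
clf.fit(tweets, labels)
print(clf.predict(["Empowering synergy to innovate dreams."]))
```

A production system would of course train on the full TweepFake corpus and compare richer embeddings, but the pipeline shape — text representation, then a classifier — stays the same.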


AI deepfakes are endangering democracy. Here are 4 ways to fight back

FOX News

With the recent explosion of AI, dazzling images, videos, audio and texts can now be easily generated by anyone with just a few simple inputs. While this technology offers many astonishing benefits, it also poses significant dangers. Among the most pernicious of these is the creation of deepfakes – highly realistic yet manipulated or fabricated content that falsely depicts real people doing or saying things they never did. Our ability to discern fact from fiction, along with democracy itself, is in the crosshairs. In recent months, deepfakes have entered the mainstream like never before.


How Deepfake Works: Unmasking the Technology with Applications and 7 Ways to Detect

#artificialintelligence

These days we can see tons of images and videos of celebrities, such as a video in which Tom Cruise talks about politics or Trump talks about Hollywood movies. Before diving into how this is possible, let's talk about what a deepfake is and how deepfakes work. Deepfake is a portmanteau of "deep learning" and "fake": the use of artificial intelligence (AI) techniques to make fake audio, video, or image content that looks and sounds real. Machine learning algorithms are used to produce these highly realistic fakes.


Deep Insights of Deepfake Technology: A Review

Mahmud, Bahar Uddin, Sharmin, Afsana

arXiv.org Artificial Intelligence

Under the aegis of computer vision and deep learning technology, new techniques have emerged that let anyone make highly realistic but fake videos and images, and even manipulate voices. This technology is widely known as Deepfake technology. Although it may seem an interesting technique for making fake videos or images of something or someone, it can spread misinformation via the internet. Deepfake content can be dangerous for individuals as well as for our communities, organizations, countries, religions, etc. Because Deepfake content creation involves high-level expertise and combines several deep learning algorithms, the results look almost real and genuine and are difficult to distinguish. In this paper, a wide range of articles has been examined to understand Deepfake technology more extensively. We examined several articles to find insights such as what Deepfake is, who is responsible for it, whether Deepfake has any benefits, and what the challenges of this technology are. We also examined several creation and detection techniques. Our study revealed that although Deepfake is a threat to our societies, proper measures and strict regulations could prevent it.


Deepfake: Curbing A Prolific Phenomenon

#artificialintelligence

The threat of deepfake content has been a prevalent issue since 2017, when a user started a viral phenomenon by combining machine learning software and AI to create inappropriate content featuring the faces of famous celebrities. Utilising a form of artificial intelligence called deep learning to manipulate and produce falsified pieces of content, deepfakes are the 21st century's answer to Photoshop. As the technology continues to develop and spread, deepfakes have become a growing public concern. The World Intellectual Property Organisation states that deepfakes can cause problems such as violations of human rights, the right of privacy, and personal data protection rights. With this technology being relatively new, the public has not yet acquainted itself with its dangers.


Detecting Deepfake Video Calls Through Monitor Illumination

#artificialintelligence

A new collaboration between a researcher from the United States' National Security Agency (NSA) and the University of California at Berkeley offers a novel method for detecting deepfake content in a live video context – by observing the effect of monitor lighting on the appearance of the person at the other end of the video call. Popular DeepFaceLive user Druuzil Tech & Games tries out his own Christian Bale DeepFaceLab model in a live session with his followers, while lighting sources change. The system works by placing a graphic element on the user's screen that changes a narrow range of its color faster than a typical deepfake system can respond – even if, like real-time deepfake streaming implementation DeepFaceLive (pictured above), it has some capability of maintaining live color transfer, and accounting for ambient lighting. The uniform color image displayed on the monitor of the person at the other end (i.e. the potential deepfake fraudster) cycles through a limited variation of hue changes that are designed not to activate a webcam's automatic white balance and other ad hoc illumination compensation systems, which would compromise the method. From the paper, an illustration of change in lighting conditions from the monitor in front of a user, which effectively operates as a diffuse 'area light'.
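The core idea — a real face reflects the monitor's hue cycle almost immediately, while a deepfake pipeline responds late or not at all — can be illustrated with a toy simulation. Everything here is invented for the sketch (the lag model, noise levels, and thresholds); the actual paper works on real video frames, not synthetic signals.

```python
# Toy simulation: does the observed face hue track the hue cycled on the
# monitor? The signal model and threshold are assumptions for this sketch.
import numpy as np

rng = np.random.default_rng(0)
frames = 200
displayed_hue = np.sin(np.linspace(0, 8 * np.pi, frames))  # hue cycle on screen

def observed_face_hue(lag_frames, reflectance=0.5):
    """Face hue as a delayed, attenuated, noisy reflection of the screen."""
    reflected = np.roll(displayed_hue, lag_frames) * reflectance
    return reflected + rng.normal(0.0, 0.05, frames)

def tracks_illumination(face_hue, max_lag=3, threshold=0.6):
    """Authentic if face hue correlates with the screen within a few frames."""
    corrs = [
        np.corrcoef(displayed_hue[: frames - lag], face_hue[lag:])[0, 1]
        for lag in range(max_lag + 1)
    ]
    return max(corrs) > threshold

authentic = observed_face_hue(lag_frames=1)  # real webcam: ~1-frame lag
deepfake = rng.normal(0.0, 0.1, frames)      # synthetic face ignores the screen

print(tracks_illumination(authentic), tracks_illumination(deepfake))
```

The small `max_lag` window mirrors the paper's premise that a genuine reflection arrives faster than a deepfake renderer can re-light the synthesized face.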


Identifying Celebrity Deepfakes From Outer Face Regions

#artificialintelligence

A new collaboration between Microsoft and a Chinese university has proposed a novel way of identifying celebrity deepfakes, by leveraging the shortcomings of current deepfake techniques to recognize identities that have been 'projected' onto other people. The approach is called Identity Consistency Transformer (ICT), and works by comparing the outermost parts of the face (jaw, cheekbones, hairline, and other outer marginal lineaments) to the interior of the face. The system exploits commonly available public image data of famous people, which limits its effectiveness to popular celebrities, whose images are available in high numbers in widely available computer vision datasets, and on the internet. The forgery coverage of faked faces across seven techniques: DeepFake in FF; DeepFake in Google DeepFake Detection; DeepFaceLab; Face2Face; FSGAN; and DF-VAE. Popular packages such as DeepFaceLab and FaceSwap provide similarly constrained coverage.
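The inner/outer comparison at the heart of ICT can be sketched as an identity-consistency check: extract one identity embedding for the inner face, one for the outer region, and flag the image when they disagree. The vectors below are made-up stand-ins; a real system would use embeddings from a trained face-recognition network.

```python
# Toy identity-consistency check in the spirit of ICT. The "embeddings"
# are invented; only the comparison logic is illustrated here.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two identity embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_suspected_deepfake(inner_emb, outer_emb, threshold=0.7):
    """On an authentic photo, inner and outer regions share one identity."""
    return cosine(inner_emb, outer_emb) < threshold

celebrity = np.array([0.9, 0.1, 0.3])   # pretend identity embedding
victim = np.array([0.1, 0.8, -0.4])     # a different person

# Authentic image: both regions carry the same identity.
print(is_suspected_deepfake(celebrity, celebrity + 0.05))  # prints False
# Face swap: celebrity identity projected onto someone else's outer face.
print(is_suspected_deepfake(celebrity, victim))            # prints True
```

This also shows why the method leans on celebrity data: the reference embeddings have to come from abundant public images of the person being impersonated.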


Deepfakes for Good

#artificialintelligence

In this technology-driven era, it is not uncommon to see fake news and propaganda spreading like wildfire. To make matters worse, the advancement of artificial intelligence has created deepfakes, a new technology emerging as one of the most common tools for nefarious activities. Deepfakes employ artificial intelligence to create fake audio, video and pictures that seem quite authentic. The technology is mainly used for malicious purposes such as defamation, revenge porn, and election propaganda. In recent years, thousands of deepfake videos targeting actors, actresses and political leaders have created havoc.


Deepfakes: Its Growing Dangers and How to Spot Them

#artificialintelligence

Artificial intelligence has become omnipresent in our lives: it recommends what to purchase, suggests films, gives us insights into traffic patterns, and even customizes advertisements on the web. The most recent addition to everyday interactions with AI is deepfakes, which are hyper-realistic AI-generated images and videos. Even though they are not genuine -- "fake" is in the name, after all -- people can struggle to distinguish deepfakes from authentic pictures. Likewise, the number of deep learning applications in the field of AI-generated images and videos is growing. Today's AI-generated images are regularly used for aesthetic purposes.